On-Chip Learning of Memristor-Based Hardware

This page summarises key themes and recent work on training and learning directly on hardware platforms built around memristor-based (resistive memory) arrays, focusing on on-chip weight updates, continual/meta learning, equilibrium propagation, and quantization-aware training.


1. On-Chip Learning & Weight Updating (including continual learning + meta-learning)

“On-Chip Learning with Memristor-Based Neural Networks: Assessing Accuracy and Efficiency Under Device Variations, Conductance Errors, and Input Noise” — Eslami et al., 2024
Demonstrates a memristor-based compute-in-memory accelerator performing on-chip training (weight updates) for a small neural network, and assesses its robustness to device variability, conductance errors, and input noise.
“The Ouroboros of Memristors: Neural Networks Facilitating Memristor Programming” — Yu et al., 2024
Explores how neural-network-based mapping can program memristor conductances efficiently (reducing programming delays and compensating device non-idealities), which is directly relevant to on-chip weight updates and meta-learning.
“Nonideality-Aware Training for Accurate and Robust Low-Power Memristive Neural Networks” — Joksas et al., 2021
Focuses on training methods that explicitly account for device/circuit non-idealities in memristor networks (variation, nonlinearity) — critical for on-chip learning and continual adaptation.
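A theme common to these works is performing the forward pass through the noisy devices themselves while applying small, local weight updates. The sketch below illustrates the idea with a tiny single-layer network and a multiplicative conductance-noise model; the noise level, learning rate, and network shape are illustrative assumptions, not values taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matvec(W, x, rel_sigma=0.05):
    """Crossbar matrix-vector product with simulated per-read conductance noise."""
    W_dev = W * (1.0 + rel_sigma * rng.standard_normal(W.shape))
    return W_dev @ x

# Tiny single-layer classifier trained with a local delta rule; every
# forward pass goes through the "noisy devices", so the learned weights
# are implicitly robust to read variation.
W = 0.1 * rng.standard_normal((3, 4))
lr = 0.1
X = rng.standard_normal((200, 4))
T = np.eye(3)[rng.integers(0, 3, size=200)]    # one-hot targets

for x, t in zip(X, T):
    y = np.tanh(noisy_matvec(W, x))            # read: analog MAC in the array
    err = t - y
    W += lr * np.outer(err * (1.0 - y**2), x)  # write: small conductance update
```

Training through the noise, rather than on ideal weights, is the essence of nonideality-aware approaches such as Joksas et al.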

Key concepts:
- Weight updates computed and applied directly in the crossbar, avoiding off-chip retraining loops
- Robustness to device-to-device variation, conductance errors, and input noise
- Nonideality-aware training that explicitly models variation and nonlinearity
- Neural-network-assisted programming of conductances to reduce delays and compensate non-idealities


2. Equilibrium Propagation

“Equilibrium Propagation for Memristor-Based Recurrent Neural Networks” — Zoppo et al., 2020
Proposes and simulates how a memristor-based analog recurrent network can implement equilibrium propagation (EP): the network first relaxes to a steady state (free phase), is then nudged toward the target and relaxes to a new state (nudged phase), and the weight update follows from the difference between the two states. The authors argue this rule is well suited to VLSI/analog memristor hardware.
“Quantum Equilibrium Propagation for Efficient Training of …” — Wanjura et al., 2025
Although focused on quantum rather than memristive systems, this work extends equilibrium propagation to other physical platforms, including electronic systems and memristor crossbar arrays, showing broader interest in EP for hardware training.
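The two-phase EP procedure described above can be sketched numerically on a small symmetric network. The energy function, relaxation schedule, and nudging strength below are simplified illustrative choices, not the exact formulation of either paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def relax(W, x, target=None, beta=0.0, steps=200, dt=0.1):
    """Relax the network state by gradient descent on a simplified
    Hopfield-style energy; beta > 0 nudges the output unit toward the target."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        grad = s - W @ np.tanh(s) - x                  # simplified energy gradient
        if target is not None:
            grad[-len(target):] += beta * (s[-len(target):] - target)
        s -= dt * grad
    return s

n = 6
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)                                 # symmetric, no self-connections
x = np.zeros(n)
x[:2] = rng.standard_normal(2)                         # input drives first two units
target = np.array([1.0])
beta, lr = 0.5, 0.05

s_free  = relax(W, x)                                  # free phase
s_nudge = relax(W, x, target, beta)                    # nudged phase
rho_f, rho_n = np.tanh(s_free), np.tanh(s_nudge)

# Local, contrastive update from the difference of the two steady states
dW = (lr / beta) * (np.outer(rho_n, rho_n) - np.outer(rho_f, rho_f))
np.fill_diagonal(dW, 0)
W += dW
```

The appeal for analog hardware is that the two relaxations are performed by the physics of the circuit itself; only the contrastive update needs to be applied to the devices.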

Key points:
- Two-phase learning: free relaxation, then a nudged relaxation toward the target
- Weight updates derive from the difference between the two steady states, a local, contrastive rule
- Well matched to analog/memristor hardware, where relaxation is carried out by the circuit dynamics themselves


3. Quantization-Aware Training

“Hardware-Aware Quantization for Accurate Memristor-Based Neural Networks” — Diware et al., 2024
Shows how quantization-aware training (QAT) can be tailored to memristor hardware by incorporating conductance-variation models, noise injection, and quantization into the training loop so the trained network matches device limitations.
“Mapping-aware Biased Training for Accurate Memristor-Based Neural Networks” — (extended version) 2024
Proposes training strategies that compensate for mapping biases when deploying quantised weights onto memristor conductances.
“Bulk-Switching Memristor-Based Compute-In-Memory …” — Wu et al., 2023
Demonstrates a memristor compute-in-memory array with quantization-aware support, including fake-quantization functions integrated into the training flow.
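The fake-quantization idea shared by these works is to snap weights to the discrete conductance levels during the forward pass while keeping a full-precision copy for gradient updates (the straight-through estimator). A minimal sketch, with illustrative bit-width, range, and noise level:

```python
import numpy as np

def fake_quantize(w, n_bits=4, w_max=1.0):
    """Snap weights to the nearest of 2**n_bits uniform conductance levels
    in [-w_max, w_max]; used only in the forward pass during QAT."""
    levels = 2 ** n_bits - 1
    w_clipped = np.clip(w, -w_max, w_max)
    return np.round((w_clipped + w_max) / (2 * w_max) * levels) / levels * 2 * w_max - w_max

rng = np.random.default_rng(2)
W = 0.5 * rng.standard_normal((3, 4))   # full-precision "shadow" weights
Wq = fake_quantize(W)                   # what the devices can actually store

# Forward pass uses the quantized (and noisy) weights; in full QAT the
# straight-through estimator would pass gradients back to W, not Wq.
x = rng.standard_normal(4)
y = (Wq + 0.02 * rng.standard_normal(Wq.shape)) @ x
```

Training against `Wq` plus injected noise, rather than against `W`, is what closes the gap between software accuracy and the accuracy of the deployed conductances.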

Highlights:
- Quantization constraints reflect the limited number of reliable conductance levels per device
- Fake quantization and noise injection during training close the gap between software weights and mapped conductances
- Mapping-aware training compensates for systematic biases introduced when writing quantised weights to devices


Quick Takeaways
- Training directly on memristor hardware is viable when the training loop explicitly models device non-idealities (variation, conductance errors, input noise)
- Equilibrium propagation offers a local, hardware-friendly alternative to backpropagation for analog recurrent networks
- Quantization-aware and mapping-aware training are essential for matching software accuracy on limited-precision conductances